certified defense
BagFlip: A Certified Defense Against Data Poisoning
Machine learning models are vulnerable to data-poisoning attacks, in which an attacker maliciously modifies the training set to change the prediction of a learned model. In a trigger-less attack, the attacker can modify the training set but not the test inputs, while in a backdoor attack the attacker can also modify test inputs. Existing model-agnostic defense approaches either cannot handle backdoor attacks or do not provide effective certificates (i.e., a proof of a defense). We present BagFlip, a model-agnostic certified approach that can effectively defend against both trigger-less and backdoor attacks. We evaluate BagFlip on image classification and malware detection datasets. BagFlip matches or exceeds the effectiveness of state-of-the-art approaches on trigger-less attacks and outperforms them on backdoor attacks.
Certified Defenses for Data Poisoning Attacks
Steinhardt, Jacob, Koh, Pang Wei W., Liang, Percy S.
Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (non-poisoned) data do not have a strong effect on the model. Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.
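The defender class the abstract describes is concrete enough to sketch: remove apparent outliers, then run empirical risk minimization on what remains. The following is a minimal illustrative sketch (not the paper's implementation): a "sphere defense" that discards points far from their class centroid, followed by plain logistic regression. The `radius` threshold, the toy clusters, and the single poisoned point are all hypothetical choices for illustration.

```python
import numpy as np

def remove_outliers(X, y, radius=3.0):
    # Sphere defense: keep only points within `radius` of their class
    # centroid. `radius=3.0` is a hypothetical threshold for this toy data.
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        keep[idx] = np.linalg.norm(X[idx] - centroid, axis=1) <= radius
    return X[keep], y[keep]

def fit_logistic(X, y, lr=0.5, steps=2000):
    # Step 2, empirical risk minimization: logistic regression trained
    # by gradient descent (numpy only, to stay self-contained).
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * (p - y).mean()
    return w, b

rng = np.random.default_rng(0)
# two clean clusters plus one far-away poisoned point labeled class 0
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)),
               rng.normal(+2.0, 0.5, (50, 2)),
               [[40.0, 40.0]]])
y = np.array([0] * 50 + [1] * 50 + [0])

X_f, y_f = remove_outliers(X, y)      # step 1: outlier removal
w, b = fit_logistic(X_f, y_f)         # step 2: ERM on the filtered set
acc = np.mean(((X[:100] @ w + b) > 0) == y[:100])  # accuracy on clean points
```

Note that the poisoned point at (40, 40) drags the class-0 centroid toward itself, which is exactly why the paper's worst-case analysis of such centroid-based defenses matters: a determined attacker can place poison to survive the filter.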
Supplementary Material for Certified Defense to Image Transformations via Randomized Smoothing: A Proof of Theorem 3.2
We now proceed to prove Theorem 3.2. Next we show that Eq. (15) holds. Does there exist a t such that both upper bounds coincide? We now show Theorem 3.2 (restated below): the setting of Theorem 3.2 holds up to the last sentence, which in turn is a direct consequence of Lemma 2. In this section, we elaborate on the details of Step 2 in Section 6. Because we don't have any constraints on the pixel values ... Here, we present the algorithm used to compute the inverse of a transformation.
Diffusion Denoising as a Certified Defense against Clean-label Poisoning
Hong, Sanghyun, Carlini, Nicholas, Kurakin, Alexey
We present a certified defense to clean-label poisoning attacks. These attacks work by injecting a small number of poisoning samples (e.g., 1%) that contain $p$-norm bounded adversarial perturbations into the training data to induce a targeted misclassification of a test-time input. Inspired by the adversarial robustness achieved by $denoised$ $smoothing$, we show how an off-the-shelf diffusion model can sanitize the tampered training data. We extensively test our defense against seven clean-label poisoning attacks and reduce their attack success rate to 0-16% with only a negligible drop in test-time accuracy. We compare our defense with existing countermeasures against clean-label poisoning, showing that our defense reduces attack success the most and offers the best model utility. Our results highlight the need for future work on developing stronger clean-label attacks, and they position our certified yet practical defense as a strong baseline for evaluating such attacks.
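The sanitization pipeline the abstract describes (denoised smoothing applied to the training set) can be sketched generically. The paper uses an off-the-shelf diffusion model as the denoiser; the box-blur stand-in below is a hypothetical substitute that only illustrates the interface of perturb-then-denoise sanitization, and the noise level `sigma` is an assumed parameter.

```python
import numpy as np

def box_denoise(img, k=3):
    # Stand-in denoiser: k x k mean filter. The paper plugs in an
    # off-the-shelf diffusion model here; this blur only illustrates
    # the interface (noisy image in, cleaned image out).
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def sanitize(dataset, sigma=0.25, rng=None):
    # Denoised-smoothing-style sanitization: perturb each training image
    # with Gaussian noise, then denoise it, before any model is trained.
    # Adversarial poisoning perturbations are destroyed along with the noise.
    rng = rng or np.random.default_rng(0)
    return [box_denoise(np.clip(x + rng.normal(0.0, sigma, x.shape), 0.0, 1.0))
            for x in dataset]

imgs = [np.full((8, 8), 0.5)]   # a toy grayscale "image" in [0, 1]
clean = sanitize(imgs)
```

The design point is that sanitization happens once, offline, on the training data, so the trained model and its inference path are unchanged.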
Certified Defenses: Why Tighter Relaxations May Hurt Training?
Jovanović, Nikola, Balunović, Mislav, Baader, Maximilian, Vechev, Martin
Certified defenses based on convex relaxations are an established technique for training provably robust models. The key component is the choice of relaxation, varying from simple intervals to tight polyhedra. Paradoxically, however, it was empirically observed that training with tighter relaxations can worsen certified robustness. While several methods were designed to partially mitigate this issue, the underlying causes are poorly understood. In this work we investigate the above phenomenon and show that tightness may not be the determining factor for reduced certified robustness. Concretely, we identify two key features of relaxations that impact training dynamics: continuity and sensitivity. We then experimentally demonstrate that these two factors explain the drop in certified robustness when using popular relaxations. Further, we show, for the first time, that it is possible to successfully train with tighter relaxations (i.e., triangle), a result supported by our two properties. Overall, we believe the insights of this work can help drive the systematic discovery of new effective certified defenses.
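The simplest relaxation this abstract mentions, plain intervals, is concrete enough to sketch. The following is a minimal, self-contained interval bound propagation through one hidden ReLU layer: an affine map sends the interval center through `W @ c + b` and the radius through `|W| @ r`, and ReLU clamps both bounds at zero. The toy weights and the perturbation budget `eps` are hypothetical.

```python
import numpy as np

def affine_interval(l, u, W, b):
    # Propagate an elementwise interval [l, u] through x -> W x + b.
    c, r = (u + l) / 2, (u - l) / 2        # center and radius
    c2 = W @ c + b
    r2 = np.abs(W) @ r                     # radius grows by |W|
    return c2 - r2, c2 + r2

def relu_interval(l, u):
    # ReLU is monotone, so it maps interval bounds to interval bounds.
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# toy two-layer net: certified bounds on the output logit for every
# input within an l-infinity ball of radius eps around x
x = np.array([0.5, -0.2])
eps = 0.1
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

l, u = x - eps, x + eps
l, u = relu_interval(*affine_interval(l, u, W1, b1))
l, u = affine_interval(l, u, W2, b2)
# l and u are sound lower/upper bounds on the single output logit
```

Tighter relaxations such as the triangle relaxation bound the ReLU more precisely than this elementwise clamp, which is exactly where the paper's continuity and sensitivity analysis of training dynamics comes in.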
Lightweight Lipschitz Margin Training for Certified Defense against Adversarial Examples
Ono, Hajime, Takahashi, Tsubasa, Kakizaki, Kazuya
How can we make machine learning provably robust against adversarial examples in a scalable way? Certified defense methods, which ensure $\epsilon$-robustness, consume huge resources, so in practice they achieve only a small degree of robustness. Lipschitz margin training (LMT) is a scalable certified defense, but it too achieves only small robustness due to over-regularization. How can we make certified defenses more efficient? We present LC-LMT, a lightweight Lipschitz margin training method that solves the above problem. Our method has the following properties: (a) efficient: it can achieve $\epsilon$-robustness at an early epoch, and (b) robust: it has the potential to reach higher robustness than LMT. In the evaluation, we demonstrate the benefits of the proposed method. LC-LMT achieves the required robustness more than 30 epochs earlier than LMT on MNIST, and shows more than 90% accuracy on both legitimate and adversarial inputs.
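The certificate behind Lipschitz margin training can be sketched in a few lines. One common form of the guarantee states that the prediction cannot change within an $\ell_2$ radius of margin$/(\sqrt{2}\,L)$, where the margin is the gap between the top two logits and $L$ is a Lipschitz bound on the network, e.g. the product of layer spectral norms (ReLU is 1-Lipschitz). The toy weight matrices and logits below are hypothetical; this is an illustrative sketch of the certificate, not LC-LMT itself.

```python
import numpy as np

def certified_radius(logits, lipschitz):
    # Lipschitz-margin certificate: the prediction is provably stable
    # within an l2 ball of radius margin / (sqrt(2) * L), where margin
    # is the gap between the top two logits.
    top2 = np.sort(logits)[-2:]
    margin = top2[1] - top2[0]
    return margin / (np.sqrt(2.0) * lipschitz)

# a global Lipschitz bound for a feed-forward ReLU net: the product of
# layer spectral norms (ReLU layers contribute a factor of 1)
Ws = [np.array([[2.0, 0.0], [0.0, 1.0]]),
      np.array([[1.0, 1.0], [0.0, 1.0]])]
L = np.prod([np.linalg.norm(W, 2) for W in Ws])

r = certified_radius(np.array([3.0, 1.0, 0.5]), L)
```

Training then simply encourages the margin to exceed $\sqrt{2}\,L\,\epsilon$; the over-regularization the abstract mentions comes from how aggressively that Lipschitz term is penalized.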